23 research outputs found

    Analysis of key aspects to manage Wireless Sensor Networks in Ambient Assisted Living environments

    Wireless Sensor Networks (WSN) based on ZigBee/IEEE 802.15.4 will be key enablers of non-invasive, highly sensitive infrastructures to support the provision of future ambient assisted living services. This paper addresses the main design concerns and requirements when conceiving ambient care systems (ACS): frameworks that provide remote monitoring, emergency detection, activity logging and personal notification dispatching services. In particular, the paper describes the design of an ACS built on top of a WSN composed of Crossbow's MICAz devices, external sensors and PDAs enabled with ZigBee technology. The middleware is integrated in an OSGi framework that processes the acquired information to provide ambient services and also enables smart network control. From our experience, we consider that in the future the combination of ZigBee technology with a service-oriented architecture may be a versatile approach to offering AAL services, both from the technical and the business points of view.
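
    As a purely illustrative Python sketch (not code from the paper; all names and thresholds are hypothetical), the following shows the kind of processing an ambient care middleware might apply to incoming WSN readings: logging activity, checking a simple emergency condition and dispatching a notification.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class SensorReading:          # hypothetical reading from a MICAz node or external sensor
        node_id: str
        kind: str                 # e.g. "temperature", "presence", "heart_rate"
        value: float

    class AmbientCareService:
        """Toy stand-in for the OSGi-hosted middleware described in the abstract."""
        def __init__(self, notify: Callable[[str], None]):
            self.notify = notify
            self.activity_log: List[SensorReading] = []

        def on_reading(self, reading: SensorReading) -> None:
            self.activity_log.append(reading)            # activity logging
            if reading.kind == "heart_rate" and reading.value < 40:
                self.notify(f"Emergency: low heart rate at node {reading.node_id}")

    # Example usage
    service = AmbientCareService(notify=print)
    service.on_reading(SensorReading("node-3", "heart_rate", 35.0))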

    Enhancing Interaction With Smart Objects Through Mobile Devices

    Interaction with smart objects can be accomplished with different technologies, such as tangible interfaces or touch computing, among others. Some of them require the object to be specially designed to be 'smart', and others are limited in the variety and complexity of the possible actions. This paper describes a user-smart object interaction model and prototype based on the well-known event-condition-action (ECA) reasoning, which can work, to a degree, independently of the intelligence embedded in the smart object. It has been designed for mobile devices to act as mediators between users and smart objects, and it provides an intuitive means for personalizing an object's behavior. When the user is close to an object, the object publishes its 'event & action' capabilities to the user's device. The user may accept the object's module offering, which enables him to configure and control that object, but also its actions with respect to other elements of the environment or the virtual world. The modular ECA interaction model facilitates the integration of different types of objects in a smart space, giving the user full control of their capabilities and facilitating creative mash-ups to build customized functionalities that combine physical and virtual actions.
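
    To make the event-condition-action idea concrete, here is a minimal Python sketch (illustrative only; the paper's actual model and APIs may differ). A rule binds an event name to a condition and an action, and a mediator running on the mobile device evaluates rules when events arrive.

    from dataclasses import dataclass
    from typing import Any, Callable, Dict, List

    @dataclass
    class EcaRule:                                    # hypothetical ECA rule representation
        event: str                                    # event name published by a smart object
        condition: Callable[[Dict[str, Any]], bool]   # predicate over the event payload
        action: Callable[[Dict[str, Any]], None]      # action on a (possibly different) object

    class Mediator:
        """Toy mediator running on the mobile device."""
        def __init__(self):
            self.rules: List[EcaRule] = []

        def add_rule(self, rule: EcaRule) -> None:
            self.rules.append(rule)

        def on_event(self, event: str, payload: Dict[str, Any]) -> None:
            for rule in self.rules:
                if rule.event == event and rule.condition(payload):
                    rule.action(payload)

    # Example: when the door publishes an "opened" event after 22:00, switch on the hall lamp
    mediator = Mediator()
    mediator.add_rule(EcaRule(
        event="door.opened",
        condition=lambda p: p.get("hour", 0) >= 22,
        action=lambda p: print("lamp.on"),
    ))
    mediator.on_event("door.opened", {"hour": 23})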

    An RFID-enabled framework to support Ambient Home Care Services

    The growing number of elderly in modern societies is encouraging advances in remote assistive solutions to enable sustainable and safe ‘ageing in place’. Among the many technologies which may serve to support Ambient Home Care Systems, RFID offers a set of differential features which make it suitable for building new interaction schemes while supporting horizontal system features such as localization. This paper details the design of a passive RFID-based AHCS, composed of an infrastructure of mobile and static tags and readers controlled by a SOA (service-oriented architecture) middleware. The possibilities of the technology, its drawbacks and its integration problems in this application domain are described from a practical approach.
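
    As a hedged illustration of how such an infrastructure could expose localization (the names and data model below are assumptions, not the paper's middleware), this Python sketch maps passive tag reads reported by static readers to room-level positions.

    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class TagRead:                 # hypothetical read event from a passive RFID reader
        tag_id: str                # tag worn by a resident or attached to an object
        reader_id: str             # static reader that detected the tag

    class LocalizationService:
        """Toy SOA-style service: keeps the last known room of each tag."""
        def __init__(self, reader_rooms: Dict[str, str]):
            self.reader_rooms = reader_rooms          # reader -> room mapping
            self.last_room: Dict[str, str] = {}

        def on_read(self, read: TagRead) -> None:
            room = self.reader_rooms.get(read.reader_id)
            if room is not None:
                self.last_room[read.tag_id] = room

        def locate(self, tag_id: str) -> Optional[str]:
            return self.last_room.get(tag_id)

    # Example usage
    loc = LocalizationService({"reader-kitchen": "kitchen", "reader-hall": "hall"})
    loc.on_read(TagRead(tag_id="resident-1", reader_id="reader-kitchen"))
    print(loc.locate("resident-1"))   # -> "kitchen"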

    A multimodal interaction system for big displays

    Big displays and ultrawalls are increasingly present in today's environments (e.g. in city spaces, buildings, means of transportation, teaching rooms, operating rooms, convention centers, etc.), and at the same time they are widely used as tools for collaborative work, monitoring and control in many other contexts. How to enhance interaction with big displays to make it more natural and fluent is still an open challenge. This paper presents a system for multimodal interaction based on pointing and speech recognition. The system makes it possible for the user to control the big display through a combination of pointing gestures and a set of control commands built on a predefined vocabulary. The system has already been prototyped and is being used for service demonstrations in different applications.
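
    The following Python sketch is illustrative only (the prototype's actual vocabulary and fusion logic are not reproduced here): it shows how a pointed-at screen region could be combined with a command drawn from a predefined vocabulary.

    from typing import Optional, Tuple

    VOCABULARY = {"open", "close", "zoom", "select"}    # assumed control vocabulary

    def fuse(pointed_target: Optional[str], spoken: str) -> Optional[Tuple[str, str]]:
        """Combine a pointing gesture (resolved to a target id) with a speech command."""
        command = spoken.strip().lower()
        if pointed_target is None or command not in VOCABULARY:
            return None                                 # reject commands outside the vocabulary
        return (command, pointed_target)

    # Example: the user points at panel "map-2" and says "zoom"
    print(fuse("map-2", "Zoom"))     # -> ('zoom', 'map-2')
    print(fuse("map-2", "rotate"))   # -> None (not in the predefined vocabulary)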

    A model for mobile-instrumented interaction and object orchestration in smart environments

    The proliferation of smartphones has given a considerable boost to the spread of smart objects and the consequent creation of smart spaces. Smart objects are electronic devices that are able to work interactively and autonomously, usually preserving the interaction metaphor of their non-electronic counterpart. Through a network interface, they can cooperate with other objects: their strength does not lie in their hardware, but in their capability to manage interactions among themselves and in the resulting orchestrated behaviour. Smart spaces are environments composed of smart devices that work together, producing behaviour of benefit to the dwellers. The current workflow requires the user to download an application from a digital distribution platform for each smart object he wants to use. This model turns the smartphone into a control centre, but it limits the potential of the objects. Devices are connected in a one-to-one network with the smartphone, a configuration that prevents direct communication among the objects and that puts the responsibility of coordinating them in the hands of the smartphone. Moreover, only a few frameworks permit the integration of several applications and the creation of complex behaviours that involve many objects from different manufacturers. The first challenge considered in this thesis is to propose a new workflow that makes it possible to integrate any kind of smart device in any behaviour of the smart space. The workflow will include the discovery of new objects and their configuration, without the need to download a new standalone application for every object. It will provide the user with a simple configuration tool to create personalized behaviours (scenes), based on the event-condition-action paradigm. Finally, it will automatically orchestrate the smart devices to produce the desired behaviours of the environment. Smart spaces are meant to behave in a personalized way, adapting to the particularities of their inhabitants. “Personalization” is about understanding the needs of each individual and helping satisfy a goal that efficiently and knowledgeably addresses each individual’s need in a given context. Thus, the second challenge tackled in this thesis is how to evolve smart spaces from customizable into personalized environments. The third open issue considered in this research is how to make the personalized configuration portable, i.e. how to enable a user to move among different smart spaces while preserving and adapting his personalized settings to each of them. Both personalization and portability will be included in the tool to automatically help the user gain full and transparent control over the environment. Solutions to device fragmentation, interoperability, seamless discovery, scene modelling, orchestration and reasoning are needed to achieve these goals. In this context, the contributions of the thesis can be summarized as follows (an illustrative sketch of the composition idea follows the list):
    - The definition of a workflow that makes it possible to personalize and control a smart space using the event-condition-action paradigm.
    - The design of a model to describe any kind of smart object and its capabilities.
    - The extension of the previous model to describe a smart space as a composition of its smart objects.
    - The application of the model to the workflow, reinterpreting the main Object-Oriented Programming features and using them to describe the interactions between objects and the recommendation process.
    - The proposal of an architecture that implements each step of the workflow and its relations with the model.
    - The proposal, development and evaluation of service concepts in real smart space settings.
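
    As a minimal, purely illustrative Python sketch of the composition idea (all class names are hypothetical, not the thesis model), a smart space can be represented as a collection of smart objects, and a scene can be checked for portability by matching the device classes it requires against the objects available in a target space.

    from typing import Dict, List, Optional, Type

    class SmartObject:
        """Base class for any smart object described by the model."""

    class Lamp(SmartObject): ...
    class Thermostat(SmartObject): ...
    class Speaker(SmartObject): ...

    class SmartSpace:
        """A smart space as a composition of its smart objects."""
        def __init__(self, objects: List[SmartObject]):
            self.objects = objects

        def find(self, required: Type[SmartObject]) -> Optional[SmartObject]:
            return next((o for o in self.objects if isinstance(o, required)), None)

    def adapt_scene(space: SmartSpace,
                    required: List[Type[SmartObject]]) -> Optional[Dict[str, SmartObject]]:
        """Bind a scene's required device classes to concrete objects in a new space."""
        binding: Dict[str, SmartObject] = {}
        for cls in required:
            obj = space.find(cls)
            if obj is None:
                return None                      # the scene cannot be ported to this space
            binding[cls.__name__] = obj
        return binding

    # A "wake up" scene needing a Lamp and a Speaker, ported to the living room
    living_room = SmartSpace([Lamp(), Thermostat(), Speaker()])
    print(adapt_scene(living_room, [Lamp, Speaker]))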

    MECCANO: a Mobile-Enabled Configuration Framework to Coordinate and Augment Networks of Smart Objects

    In this paper, we exploit the capabilities of mobile devices as instruments to facilitate interaction in spaces populated with smart objects. We do this through MECCANO, a framework that supports an interaction method through which a user can perform physical discovery and versatile configuration of behaviors involving a network of smart objects. Additionally, MECCANO guides the developer to easily integrate new augmented objects in the smart ecosystem. Behaviors are rule-based micro-services composed of a combination of events, conditions and actions that one or more smart objects can trigger, detect or perform. Each object owns and publishes its capabilities in a software module; this module becomes available when a user is physically within the area of influence of the smart object. The capabilities provided by a specific object can be merged with those of other objects (including those of the user's mobile device itself) to configure a behavior involving several objects, adapted to the user's needs. In operation, the behavior runs on the mobile device, which serves as the orchestrator of the involved objects. The framework also facilitates sharing micro-services in such a way that users can act as prosumers by generating their self-made behaviors. New behaviors are associated with the classes of objects that are needed to execute them, becoming ready for other users to download. The proposed interaction method and its tools are demonstrated from both the developer's and the end-user's points of view, through practical implementations.
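
    A hedged Python sketch of the capability-publication step (illustrative only; MECCANO's actual module format is not reproduced here): each object advertises the events and actions it supports, and the mobile device merges the modules of the objects in range so a behavior can reference capabilities from several of them.

    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class CapabilityModule:            # hypothetical module published by a smart object
        object_id: str
        events: Set[str] = field(default_factory=set)
        actions: Set[str] = field(default_factory=set)

    @dataclass
    class Behavior:                    # rule-based micro-service: the events and actions it needs
        name: str
        required_events: Set[str]
        required_actions: Set[str]

    def modules_in_range(all_modules: List[CapabilityModule],
                         nearby_ids: Set[str]) -> List[CapabilityModule]:
        """Only objects whose area of influence the user is in publish their module."""
        return [m for m in all_modules if m.object_id in nearby_ids]

    def can_configure(behavior: Behavior, modules: List[CapabilityModule]) -> bool:
        events = set().union(*(m.events for m in modules)) if modules else set()
        actions = set().union(*(m.actions for m in modules)) if modules else set()
        return behavior.required_events <= events and behavior.required_actions <= actions

    # Example: a "doorbell flash" behavior combining the doorbell's event with the lamp's action
    doorbell = CapabilityModule("doorbell", events={"pressed"})
    lamp = CapabilityModule("lamp", actions={"flash"})
    behavior = Behavior("doorbell flash", {"pressed"}, {"flash"})
    print(can_configure(behavior, modules_in_range([doorbell, lamp], {"doorbell", "lamp"})))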

    An object-oriented model for object orchestration in smart environments

    Nowadays, the heterogeneity of interconnected things and communication technologies creates several small worlds, each composed of a single object and a smartphone. For each object, the user needs to download a specific application, then search for and connect the device. The result is a waste of valuable resources: several objects are able to communicate with the smartphone, but they cannot interact directly among themselves. In this paper, we propose a model that can be used to define a set of standard interfaces suitable for every smart object. Devices that adhere to the same model can easily be controlled and placed in relation to one another, creating multi-object behaviors for a smart space. The smartphone is still a control center, but with a single application it is possible to control and personalize spaces in a holistic way, instead of using the traditional one-to-one approach. Moreover, personalization should be portable: it is desirable that a behavior works in as many smart spaces as possible, at least in a way similar to how it works in the environment in which it was configured, freeing the user from the tedious task of adapting it manually every time s/he goes to another space. Portable personalization extends the bring-your-own-device paradigm to a new "bring your own space" paradigm. The model is inspired by object-oriented programming, reinterpreting features such as inheritance and polymorphism for the real world, so that a software system can adapt existing behaviors to new spaces. The use of the model is illustrated in the paper with two examples of smart spaces.
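
    As a minimal Python illustration of the inheritance/polymorphism reinterpretation (class and method names are assumptions, not the paper's interfaces), a behavior written against a generic device class runs unchanged on any concrete subclass, which is what makes it portable across spaces.

    class Lamp:                        # generic device class a behavior can be written against
        def turn_on(self) -> None:
            raise NotImplementedError

    class BrandALamp(Lamp):            # concrete object found in smart space A
        def turn_on(self) -> None:
            print("Brand A lamp: on")

    class BrandBLamp(Lamp):            # concrete object found in smart space B
        def turn_on(self) -> None:
            print("Brand B lamp: on")

    def night_behavior(lamp: Lamp) -> None:
        """Behavior defined once, against the generic class (polymorphism)."""
        lamp.turn_on()

    # The same behavior runs unchanged in either space ("bring your own space")
    night_behavior(BrandALamp())
    night_behavior(BrandBLamp())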

    Review and Simulation of Counter-UAS Sensors for Unmanned Traffic Management

    Noncollaborative surveillance of airborne UAS (Unmanned Aerial Systems) is a key enabler of the safe integration of UAS within a UTM (Unmanned Traffic Management) ecosystem. Thus, a wide variety of new sensors (known as Counter-UAS sensors) are being developed to provide real-time UAS tracking, ranging from radar, RF analysis and image-based detection to even sound-based sensors. This paper aims to discuss the current state-of-the-art technology in this wide variety of sensors (both academic and commercial) and to propose a set of simulation models for them. The review therefore focuses on identifying the key parameters and processes that allow their performance and operation to be modeled, reflecting the variety of measurement processes. The resulting simulation models are designed to help evaluate how sensor performance affects UTM systems, and specifically the implications for their tracking and tactical services (i.e., tactical conflicts with uncontrolled drones). The simulation models cover probabilistic detection (i.e., false alarms and probability of detection) and measurement errors, taking into account equipment installation (i.e., monostatic vs. multistatic configurations, passive sensing, etc.). The models were integrated in a UTM simulation platform, and simulation results are included in the paper for active radars, passive radars, and acoustic sensors.
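
    As a hedged illustration of what such a sensor simulation model can look like (the parameters and structure below are assumptions, not the paper's models), this Python sketch produces a detection with probability Pd, adds Gaussian measurement noise to range and azimuth, and occasionally emits a false alarm.

    import random
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class RadarModel:                      # hypothetical monostatic radar model
        p_detect: float = 0.9              # probability of detection inside coverage
        p_false_alarm: float = 0.01        # probability of a false alarm per scan
        range_sigma_m: float = 5.0         # std. dev. of range error (metres)
        azimuth_sigma_deg: float = 1.0     # std. dev. of azimuth error (degrees)
        max_range_m: float = 3000.0

        def measure(self, true_range_m: float,
                    true_azimuth_deg: float) -> Optional[Tuple[float, float]]:
            """Return a noisy (range, azimuth) measurement, or None if the target is missed."""
            if true_range_m > self.max_range_m or random.random() > self.p_detect:
                return None
            return (random.gauss(true_range_m, self.range_sigma_m),
                    random.gauss(true_azimuth_deg, self.azimuth_sigma_deg))

        def false_alarm(self) -> Optional[Tuple[float, float]]:
            """Occasionally report a spurious plot uniformly distributed in coverage."""
            if random.random() < self.p_false_alarm:
                return (random.uniform(0.0, self.max_range_m), random.uniform(0.0, 360.0))
            return None

    # Example scan against a drone at 1200 m, bearing 45 degrees
    radar = RadarModel()
    print(radar.measure(1200.0, 45.0))
    print(radar.false_alarm())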

    Experimental prototype for remote tower systems design

    Small and medium-sized airports are lightly used or used only for short periods of time, so remote control towers are a good solution for controlling air traffic at these airports. A surveillance and monitoring air traffic prototype has been built to test surveillance, control and visualization concepts. The prototype comprises a hybrid real-synthetic scenario that uses augmented reality techniques to evaluate the operation of the surveillance/vision systems, observing the behavior of the system with different targets, backgrounds, inclement weather, etc., and helping the design and experimentation of novel control procedures in the airport area. This allows the analysis of risky situations and controller training without putting either people or material goods at risk.

    Sensors and Communication Simulation for Unmanned Traffic Management

    Unmanned traffic management (UTM) systems will become a key enabler of the future drone market ecosystem, enabling the safe concurrent operation of both manned and unmanned aircraft. Currently, these systems are usually tested by performing real scenarios that are costly, limited, hardly scalable, and poorly repeatable. As a solution, in this paper we propose an agent-based simulation platform, implemented through a microservice architecture, which can simulate UTM information sources such as flight plans, telemetry messages, or tracks from a surveillance network. The final objective of this simulator is to use these information streams to perform a system-level evaluation of UTM systems in both the pre-flight and in-flight stages. The proposed platform, with a focus on the simulation of communications and sensors, allows UTM actors' behaviors and their interactions to be modeled. In addition, it also supports the manual definition of events to simulate unexpected behaviors (contingencies), such as communications failures or pilots' actions. In order to validate our architecture, we implemented a simulator that considers the following actors: drones, pilots, ground control stations, surveillance networks, and communications networks. This platform enables the simulation of drone trajectory and control, the C2 (command and control) link, drone detection by surveillance sensors, and the communication of all agents by means of a mobile communications network. Our results show that it is possible to faithfully recreate complex scenarios using this simulator, mitigating the disadvantages of real testbeds.
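
    A minimal, illustrative Python sketch of the agent-based idea (all classes are assumptions, not the platform's actual microservices): a drone agent emits a telemetry message at each simulation step, and a manually injected contingency silences the C2 link during a time window.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Telemetry:                       # hypothetical telemetry message
        drone_id: str
        t: int
        position: Tuple[float, float]

    @dataclass
    class C2LinkFailure:                   # manually defined contingency (event)
        start: int
        end: int

        def active(self, t: int) -> bool:
            return self.start <= t <= self.end

    class DroneAgent:
        def __init__(self, drone_id: str, waypoints: List[Tuple[float, float]],
                     contingency: Optional[C2LinkFailure] = None):
            self.drone_id = drone_id
            self.waypoints = waypoints
            self.contingency = contingency

        def step(self, t: int) -> Optional[Telemetry]:
            """Advance one step; telemetry is lost while the C2 link is down."""
            pos = self.waypoints[min(t, len(self.waypoints) - 1)]
            if self.contingency and self.contingency.active(t):
                return None
            return Telemetry(self.drone_id, t, pos)

    # Example: a 5-step flight with a C2 failure injected on steps 2-3
    agent = DroneAgent("drone-1", [(0, 0), (0, 10), (0, 20), (0, 30), (0, 40)],
                       contingency=C2LinkFailure(2, 3))
    print([agent.step(t) for t in range(5)])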